- Path: solon.com!not-for-mail
- From: seebs@solutions.solon.com (Peter Seebach)
- Newsgroups: comp.os.msdos.programmer,comp.lang.c
- Subject: Re: open vs fopen?
- Date: 17 Feb 1996 00:59:59 -0600
- Organization: Usenet Fact Police (Undercover)
- Message-ID: <4g3udf$6pe@solutions.solon.com>
- References: <uEYFxc9nX8WX083yn@mbnet.mb.ca> <4ftusv$181@newshost.cyberramp.net> <danpop.824430285@rscernix> <4g39d4$ej8@newshost.cyberramp.net>
- NNTP-Posting-Host: solutions.solon.com
-
- In article <4g39d4$ej8@newshost.cyberramp.net>,
- John Noland <sinan@cyberramp.net> wrote:
- >In article <danpop.824430285@rscernix>, danpop@mail.cern.ch says...
- >>In <4ftusv$181@newshost.cyberramp.net> sinan@cyberramp.net (John Noland) writes:
- >>>The open() function and its ilk are normally referred to as the "low-level"
- >>>I/O package. fopen() is the "Buffered" or "Standard" I/O package. The
- >>>strength of the low-level I/O functions is that they offer excellent control,
- >>>particularly when used with binary files.
-
- >>??? What can read/write do on a binary file that fread/fwrite cannot do?
-
- >Where in the above did I compare them with each other?
-
- Well, "they offer excellent control, particularly when used with binary
- files" would imply that they can do things that the others can't.
-
- >>>If you have a special I/O need, you
- >>>can use the low-level I/O routines to fashion the exact I/O package to fit
- >>>your needs.
-
- >>Same question as above.
-
- >Same question from me again. Are the read/write functions written using fread/
- >fwrite or is it vice versa? I believe, and feel free to correct me if I'm
- >wrong, that the fread/fwrite functions are written using the read/write
- >functions.
-
- They are sometimes. Sometimes it's done the other way, and sometimes it's
- all done in terms of the native syscalls which may be different. But, you
- failed to answer his question: In what way can you customize the
- low-level calls that you can't customize the high-level calls? There are answers,
- but they have *nothing* to do with C.
-
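- (For illustration only - this is not code from any real C library, and the
- names my_file and my_fread are made up. But "fread written using read" looks
- roughly like this: a buffer, a refill when it runs dry, and a memcpy. A real
- stdio does far more bookkeeping.)

```c
/* Hypothetical sketch of a buffered read layered on POSIX read().
 * Names (my_file, my_fread) are invented for illustration. */
#include <stddef.h>
#include <string.h>
#include <unistd.h>

struct my_file {
    int fd;                 /* underlying file descriptor */
    unsigned char buf[512]; /* the buffer stdio adds on top */
    size_t pos, len;        /* read position and bytes currently buffered */
};

static size_t my_fread(void *dst, size_t n, struct my_file *f)
{
    size_t got = 0;
    while (got < n) {
        if (f->pos == f->len) {          /* buffer empty: refill it */
            ssize_t r = read(f->fd, f->buf, sizeof f->buf);
            if (r <= 0)
                break;                   /* EOF or error */
            f->pos = 0;
            f->len = (size_t)r;
        }
        size_t take = f->len - f->pos;   /* hand out what's buffered */
        if (take > n - got)
            take = n - got;
        memcpy((unsigned char *)dst + got, f->buf + f->pos, take);
        f->pos += take;
        got += take;
    }
    return got;
}
```

- Nothing stops an implementation from doing it the other way around, or
- building both on some third thing; the layering is invisible to a strictly
- conforming program.
-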
- >>>The standard I/O package is one such creation. It is designed to provide fast
- >>>buffered I/O, mostly for text situations.
- >> ^^^^^^^^^^^^^^^^^^^^^^^^^^
- >>???
-
- >I use the stdio routines almost exclusively myself. Even for binary files.
- >That doesn't make the above statement wrong. Your statements seem to imply
- >that all systems are buffered at the OS level, so who cares what I/O
- >approach you use. You're not going to see any difference in I/O speed no
- >matter what. While this may be true for your particular situation, it's by
- >no means universal. The usefulness of buffering is in working sequentially
- >with a file. This makes buffered I/O well suited for text files. That
- >doesn't mean you can't or won't read a binary file sequentially or that it
- >sucks for use with binary files. Which, judging from what you've written,
- >is exactly what you'll think I meant.
-
- Well, binary files benefit just as dramatically from buffering. Simple
- test programs run on the order of 10 times faster for me using stdio instead
- of the low level calls, unless they consciously make an effort to act like
- the stdio buffering - in which case, why bother? stdio does it better than
- I do.
-
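- (A sketch of the kind of test program I mean - a byte-at-a-time copy through
- stdio. Despite the per-byte calls, getc/putc mostly just move bytes in and
- out of stdio's buffer; only about every BUFSIZ bytes does a real read() or
- write() happen. The same loop with one-byte read()/write() calls pays a
- system call per byte, which is where the order-of-magnitude difference
- comes from.)

```c
/* Byte-at-a-time copy through stdio's buffers.  copy_bytes returns the
 * number of bytes copied, or -1 on a write error. */
#include <stdio.h>

static long copy_bytes(FILE *in, FILE *out)
{
    int c;
    long n = 0;
    /* getc/putc are usually macros hitting the buffer directly */
    while ((c = getc(in)) != EOF) {
        if (putc(c, out) == EOF)
            return -1;
        n++;
    }
    return n;
}
```

-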
- >>>When fopen() is used, several things happen. The file, of course, is opened.
- >>>Second, an external character array is created to act as a buffer. The
- >>><stdio.h> file has this buffer set to a size of 512 bytes.
-
- >>To a default size of _minimum_ 256 bytes. I'm typing this text on a
- >>system which uses a default buffer size of 8192 bytes. The size of the
- >>buffer can be controlled by the programmer via the setvbuf function.
-
- >That's a big default buffer. How many disk accesses does it take to fill
- >that thing? I seriously doubt you would want a buffer that big when reading
- >a file randomly. But, maybe your system is so fast you can afford that kind
- >of waste. In my <stdio.h> file, the constant BUFSIZ is 512. You say that
- >you can control the size of the buffer using setvbuf(). That's true, but it
- >doesn't change the default buffer size as you seem to imply. If I do the
- >following:
-
- >FILE *input, *output;
-
- >input = fopen("myfile.in", "r+b");
- >setvbuf(input, NULL, _IOFBF, 1024);
-
- >output = fopen("myfile.out", "w+b");
-
- >do you think output points to a buffer of the size set by setvbuf? I don't
- >think this is what you meant.
-
- No, what he probably means is that, on his system, BUFSIZ is 8192. This
- is not atypical. It probably takes one disk access; it's a common disk
- block size on BSD-like unix systems, and probably on many others. In
- which case, it takes about the same time to read a 1k block as to read
- an 8k block.
-
- But your system's BUFSIZ has *nothing* to do with the language spec; as Dan
- says, 256 or larger is ok.
-
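- (And to be clear about what setvbuf does and doesn't do - it sets the buffer
- for one stream only, after that stream is opened and before any other
- operation on it; it never changes BUFSIZ or the default any later fopen()
- gets. A sketch, using tmpfile() as a stand-in for fopen() so it runs
- anywhere:)

```c
/* setvbuf affects exactly one stream.  tmpfile() stands in for
 * fopen(..., "r+b") purely so the example is self-contained. */
#include <stdio.h>

int demo(void)
{
    FILE *input = tmpfile();
    FILE *output = tmpfile();
    if (!input || !output)
        return -1;

    static char ibuf[1024];
    /* Fully buffered, 1024-byte buffer -- this touches 'input' alone. */
    if (setvbuf(input, ibuf, _IOFBF, sizeof ibuf) != 0)
        return -1;

    /* 'output' still gets the implementation's default (typically BUFSIZ). */
    fputs("written through the default buffer\n", output);
    fflush(output);
    return 0;
}
```

-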
- >>The same applies to other operating systems (e.g. Unix) except that they
- >>do a much better job at buffering I/O than MSDOS.
-
- >Some versions of UNIX don't do any buffering at all.
-
- Well, they can be told specifically not to, but I haven't *ever* seen or
- used one that doesn't buffer pretty dramatically. I'm accustomed to
- working with projects with a megabyte of source, and doing entire edit/build/
- test cycles without disk activity. That's about a 6 meg buffer.
-
- >>So many inaccurate statements don't clarify things. On the contrary.
-
- >The shit you have to put up with when trying to be helpful.
-
- Makes sense; if you say a lot of things which are only partially true, or
- untrue, you aren't being very helpful, and people are likely to complain.
-
- Not to mention, the whole low level IO thing is really not very much a
- part of C.
-
- -s
- --
- Peter Seebach - seebs@solon.com - Copyright 1995 Peter Seebach.
- C/Unix wizard -- C/Unix questions? Send mail for help. No, really!
- FUCK the communications decency act. Goddamned government. [literally.]
- The *other* C FAQ - ftp taniemarie.solon.com /pub/c/afq - Not A Flying Toy
-